Accurate motion and depth recovery is important for many robot vision tasks, including autonomous driving. Most previous studies achieve cooperative multi-task interaction through pre-defined loss functions or cross-domain prediction. This paper proposes a multi-task scheme that achieves mutual assistance by means of our flow-to-depth (F2D), depth-to-flow (D2F), and exponential moving average (EMA) mechanisms. Based on differentiable shallow networks, the F2D and D2F mechanisms enable multi-scale information integration between the optical flow and depth domains. A dual-head mechanism is used to predict optical flow from rigid and non-rigid motion in a split manner, which significantly improves optical flow estimation. In addition, EMA is used in our multi-task training to make the predictions more robust and stable. Experimental results on the KITTI dataset show that our multi-task scheme outperforms other multi-task schemes and provides marked improvements in the predicted results.
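The EMA mechanism mentioned above is a standard weight-averaging technique. The following minimal PyTorch sketch shows one common way to maintain an EMA copy of a model during multi-task training; the function and attribute names are illustrative assumptions, not the authors' implementation.

```python
import copy
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    """Blend the current model weights into the slowly moving EMA copy."""
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)

# Typical usage (illustrative):
#   ema_model = copy.deepcopy(model)
#   ... after every optimizer step: ema_update(ema_model, model)
#   evaluate with ema_model for more stable predictions
```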
Depth estimation is a critical step for image-guided intervention in robotic surgery and laparoscopic imaging systems. Since per-pixel depth ground truth is difficult to acquire for laparoscopic image data, supervised depth estimation is rarely applied to surgical applications. As an alternative, methods that train a depth estimator using only synchronized stereo image pairs have been introduced. However, most recent work focuses on left-right consistency in 2D and ignores the valuable inherent 3D information of objects in real-world coordinates, meaning that left-right 3D geometric structural consistency has not been fully exploited. To overcome this limitation, we propose M3Depth, a self-supervised depth estimator that exploits the 3D geometric structural information hidden in stereo pairs while keeping monocular inference. The method also removes, via masking, the influence of border regions that are invisible in at least one of the stereo images, to enhance the correspondences between the left and right images in the overlapping regions. Extensive experiments show that our method substantially outperforms previous self-supervised approaches on both a public dataset and a newly acquired dataset, indicating good generalization across different samples and laparoscopes.
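As a rough illustration of a masked 3D geometric consistency term of the kind described above, the sketch below back-projects two depth maps into camera-space point clouds and penalizes their difference only inside a visibility mask. It is a simplified sketch under the assumption that the right-view depth has already been warped into the left view, not the authors' exact loss.

```python
import torch

def backproject(depth, K_inv):
    """Back-project a depth map (B,1,H,W) into camera-space 3D points (B,3,H,W)."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=depth.device),
                            torch.arange(w, device=depth.device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()  # homogeneous pixels (3,H,W)
    rays = (K_inv @ pix.reshape(3, -1)).reshape(1, 3, h, w)          # camera rays per pixel
    return rays * depth

def masked_3d_consistency(left_depth, right_depth_in_left, K_inv, mask):
    """L1 distance between the two point clouds, restricted to the visible overlap."""
    pts_l = backproject(left_depth, K_inv)
    pts_r = backproject(right_depth_in_left, K_inv)
    diff = (pts_l - pts_r).abs() * mask          # mask out borders invisible in one view
    return diff.sum() / mask.sum().clamp(min=1)
```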
Due to the lack of quality annotations in the medical imaging community, semi-supervised learning methods are highly valued for image semantic segmentation tasks. In this paper, an advanced consistency-aware, pseudo-label-based self-training approach is presented to fully exploit the power of the Vision Transformer (ViT) and the convolutional neural network (CNN). Our proposed framework consists of a feature learning module in which a ViT and a CNN mutually reinforce each other, and a guidance module tailored for consistency awareness. Pseudo labels are inferred and utilized recurrently and separately by the CNN and ViT views in the feature learning module to expand the dataset, and the two views benefit each other. Meanwhile, a perturbation scheme is designed for the feature learning module, and averaged network weights are employed to develop the guidance module. By doing so, the framework combines the feature learning strengths of the CNN and ViT, strengthens performance through dual-view co-training, and enables consistency-aware supervision in a semi-supervised manner. A topological exploration of all the alternative supervision modes with CNN and ViT is carried out and validated in detail, demonstrating the most promising performance and specific settings of our method for semi-supervised medical image segmentation tasks. Experimental results show that the proposed method achieves state-of-the-art performance on a public benchmark dataset across a variety of metrics. The code is publicly available.
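The dual-view pseudo-labeling idea can be illustrated with a minimal cross-teaching loss in which each network is supervised by the other's hard pseudo labels. This is a generic sketch of the mechanism, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cross_pseudo_label_loss(cnn_logits, vit_logits):
    """Each view supervises the other with its hard pseudo labels; gradients are
    blocked through the label-producing branch via detach/argmax."""
    cnn_pseudo = cnn_logits.argmax(dim=1).detach()
    vit_pseudo = vit_logits.argmax(dim=1).detach()
    loss_cnn = F.cross_entropy(cnn_logits, vit_pseudo)   # ViT teaches the CNN
    loss_vit = F.cross_entropy(vit_logits, cnn_pseudo)   # CNN teaches the ViT
    return loss_cnn + loss_vit
```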
Autonomous robots in endovascular interventions have the potential to navigate the circulatory system safely and reliably while reducing susceptibility to human error. However, training such robots involves many challenges, such as long training durations caused by sample-inefficient machine learning algorithms, and safety issues arising from the interaction between the catheter and the endovascular phantom. Physics simulators have been used in the context of endovascular procedures, but they are typically employed for staff training and usually do not conform with autonomous cannulation goals. Moreover, most current simulators are closed source, which hinders the collaborative development of safe and reliable autonomous systems. In this work, we introduce CathSim, an open-source simulation environment that accelerates the development of machine learning algorithms for autonomous endovascular navigation. We first simulate a high-fidelity catheter and aorta following a state-of-the-art endovascular robot. We then provide the capability of real-time force sensing between the catheter and the aorta within the simulation environment. We validate our simulator by performing two different catheterization tasks within two major arteries using two popular reinforcement learning algorithms, Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC). Experimental results show that, with our open-source simulator, we can successfully train reinforcement learning agents to perform different autonomous cannulation tasks.
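For context, training an agent such as PPO in a gym-style simulation environment typically takes only a few lines with an off-the-shelf library. The sketch below uses stable-baselines3; the environment id "CathSim/Navigate-v0" is a hypothetical placeholder rather than the simulator's documented registration name.

```python
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CathSim/Navigate-v0")        # hypothetical env id for illustration
model = PPO("MlpPolicy", env, verbose=1)     # on-policy PPO with an MLP policy
model.learn(total_timesteps=200_000)         # train on simulated cannulation rollouts
model.save("ppo_cannulation")
```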
Deformable image registration, which estimates the spatial transformation between different images, is an important task in medical imaging. Many previous studies have used learning-based methods with multi-stage registration to perform 3D image registration and improve performance. However, the performance of multi-stage approaches is limited by the size of the receptive field, which cannot cover the complex motion present at a single spatial scale. We propose a new registration network that combines a recursive network architecture and a mutual attention mechanism to overcome these limitations. Compared with state-of-the-art deep learning methods, our network, based on the recursive structure, achieves the highest accuracy on a lung computed tomography (CT) dataset (Dice score of 92% and average surface distance of 3.8 mm for the lungs) and one of the most accurate results on an abdominal CT dataset with 9 organs of various sizes (Dice score of 55% and average surface distance of 7.8 mm). We also show that cascading 3 recursive networks is sufficient to achieve state-of-the-art results without a significant increase in inference time.
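The recursive idea can be sketched as repeatedly applying the same registration network and accumulating the estimated displacement field. The 2D code below is a simplified illustration of such a cascade; the paper operates on 3D volumes and adds mutual attention, which is omitted here.

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp an image (B,C,H,W) with a dense displacement field (B,2,H,W), in pixels."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=image.device),
                            torch.arange(w, device=image.device), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().unsqueeze(0)   # identity grid (1,2,H,W)
    grid = base + flow
    gx = 2.0 * grid[:, 0] / (w - 1) - 1.0                      # normalize to [-1, 1]
    gy = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(image, torch.stack([gx, gy], dim=-1), align_corners=True)

def recursive_register(reg_net, moving, fixed, n_stages=3):
    """Apply the same registration network recursively, warping the original moving
    image with the accumulated field at every stage (additive composition is a
    simplification used here for illustration)."""
    b, _, h, w = moving.shape
    total_flow = torch.zeros(b, 2, h, w, device=moving.device)
    warped = moving
    for _ in range(n_stages):
        total_flow = total_flow + reg_net(warped, fixed)       # residual field update
        warped = warp(moving, total_flow)
    return warped, total_flow
```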
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy against constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that, even when sharing the same framework, the specific instantiation is very important to model performance. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Massive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
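As a rough picture of what an inverted residual block that mixes convolutional and attention-style token mixing can look like, the PyTorch sketch below combines a depthwise convolution (short-distance dependency) with multi-head self-attention (long-distance interaction) inside one residual block. The layer ordering and hyper-parameters are illustrative assumptions, not EMO's exact iRMB design.

```python
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    """Illustrative inverted-residual block mixing local (depthwise conv) and
    global (self-attention) token mixing; not the paper's exact block."""
    def __init__(self, dim, expansion=4, heads=4):
        super().__init__()
        hidden = dim * expansion
        self.expand = nn.Sequential(nn.Conv2d(dim, hidden, 1), nn.GELU())
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)     # local mixing
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)   # global mixing
        self.project = nn.Conv2d(hidden, dim, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        y = self.expand(x)
        y = self.dw(y)
        t = y.flatten(2).transpose(1, 2)                 # (B, H*W, hidden) token sequence
        t, _ = self.attn(t, t, t)                        # full attention; EMO restricts this
        y = y + t.transpose(1, 2).reshape(b, -1, h, w)   # fuse local and global paths
        return x + self.project(y)                       # inverted residual connection
```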
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the resulting question-answer pairs to train a BERT-based language model for a state-of-the-art QA system. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed from subjects (or objects) and predicates while objects (or subjects) are taken as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves performance on par with existing state-of-the-art QA systems, with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
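A toy sketch of the triple-to-question idea described above; the templates are an illustrative simplification, not PIE-QG's actual question-generation rules.

```python
def qa_pairs_from_triple(subj, pred, obj):
    """Form synthetic QA pairs from an OpenIE <subject, predicate, object> triple:
    keep the predicate plus one argument in the question and use the other
    argument as the answer."""
    return [
        (f"Who or what {pred} {obj}?", subj),   # subject is the answer
        (f"{subj} {pred} who or what?", obj),   # object is the answer
    ]

# qa_pairs_from_triple("Marie Curie", "discovered", "radium")
# -> [("Who or what discovered radium?", "Marie Curie"),
#     ("Marie Curie discovered who or what?", "radium")]
```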
Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally excavate the impact of the Transformer from limited medical data, we propose an auxiliary difficulty ranking task. The Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens so as to simultaneously achieve difficulty measurement and maintain the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
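The online/target scheme described above resembles BYOL-style bootstrapping. The sketch below shows one plausible form of the core step, in which the online branch predicts the target branch's representation of differently perturbed tokens and the target is updated as an exponential moving average of the online network; the EMA update, the perturbation function, and all names are assumptions for illustration, not BOLT's published details.

```python
import torch
import torch.nn.functional as F

def bolt_style_step(online, predictor, target, tokens, perturb, momentum=0.99):
    """One bootstrap step: predict the target's representation of the same tokens
    under a different perturbation, then EMA-update the target branch."""
    p_online = predictor(online(perturb(tokens)))
    with torch.no_grad():
        z_target = target(perturb(tokens))                 # a differently perturbed view
    loss = F.mse_loss(F.normalize(p_online, dim=-1), F.normalize(z_target, dim=-1))
    with torch.no_grad():                                  # EMA update of the target branch
        for pt, po in zip(target.parameters(), online.parameters()):
            pt.mul_(momentum).add_(po, alpha=1 - momentum)
    return loss
```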
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGEs to infer inductively. To address this challenge, we resort to analogical inference and propose AnKGE, a novel and general self-supervised framework that enhances KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity level, relation level, and triple level. In AnKGE, we train an analogy function for each level of analogical inference, which takes the original element embedding from a well-trained KGE model as input and outputs the analogical object embedding. To combine the inductive inference capability of the original KGE model with the analogical inference capability enhanced by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights in the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
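The final scoring rule described above amounts to a weighted interpolation between the two scores. A minimal sketch follows, with `weight` standing in for the adaptive weight, whose exact form is not spelled out here; `gate_logits` in the usage comment is likewise a hypothetical placeholder.

```python
import torch

def interpolated_score(base_score, analogy_score, weight):
    """Combine the base KGE score with the analogy score for link prediction;
    `weight` is a per-triple adaptive coefficient in [0, 1]."""
    return weight * analogy_score + (1.0 - weight) * base_score

# e.g. with tensors of candidate scores:
# scores = interpolated_score(base_scores, analogy_scores, torch.sigmoid(gate_logits))
```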
Digital engineering transformation is a crucial process for the engineering paradigm shifts in the fourth industrial revolution (4IR), and artificial intelligence (AI) is a critical enabling technology in digital engineering transformation. This article discusses the following research questions: What are the fundamental changes in the 4IR? More specifically, what are the fundamental changes in engineering? What is digital engineering? What are the main uncertainties there? What is trustworthy AI? Why is it important today? What are emerging engineering paradigm shifts in the 4IR? What is the relationship between the data-intensive paradigm and digital engineering transformation? What should we do for digitalization? From investigating the pattern of industrial revolutions, this article argues that ubiquitous machine intelligence (uMI) is the defining power brought by the 4IR. Digitalization is a condition to leverage ubiquitous machine intelligence. Digital engineering transformation towards Industry 4.0 has three essential building blocks: digitalization of engineering, leveraging ubiquitous machine intelligence, and building digital trust and security. The engineering design community at large is facing an excellent opportunity to bring the new capabilities of ubiquitous machine intelligence and trustworthy AI principles, as well as digital trust, together in various engineering systems design to ensure the trustworthiness of systems in Industry 4.0.